Learning for Decentralized Control of Multiagent Systems in Large, Partially-Observable Stochastic Environments

Authors

  • Miao Liu
  • Christopher Amato
  • Emily P. Anesta
  • John Daniel Griffith
  • Jonathan P. How
Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general framework for multiagent sequential decision-making under uncertainty. Although Dec-POMDPs are typically intractable to solve for real-world problems, recent research on macro-actions (i.e., temporally extended actions) has significantly increased the size of problems that can be solved. However, current methods assume that the underlying Dec-POMDP model is known a priori or that a full simulator is available at planning time. To accommodate more realistic scenarios in which such information is unavailable, this paper presents a policy-based reinforcement learning approach that learns agent policies solely from trajectories generated by previous interaction with the environment (e.g., demonstrations). We show that our approach can generate valid macro-action controllers, and we develop an expectation-maximization (EM) algorithm, called Policy-based EM (PoEM), which has convergence guarantees for batch learning. Our experiments show that PoEM is a scalable learning method that can learn optimal policies and improve upon hand-coded “expert” solutions.
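The abstract describes PoEM only at a high level. As a rough illustration of the batch-EM idea behind it, the toy sketch below fits a single-agent finite-state controller to demonstration trajectories by maximum-likelihood EM, using a forward-backward pass over controller nodes. It deliberately omits the reward re-weighting and the multiagent/macro-action structure that PoEM itself uses; all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def em_fsc(trajs, Z, A, O, iters=50, seed=0):
    """Fit a finite-state controller (FSC) to demonstration trajectories by EM.
    trajs: list of (acts, obs) with obs[t] observed after acts[t];
    Z, A, O: number of controller nodes, actions, observations.
    Toy maximum-likelihood version (no reward weighting, no scaling for
    long trajectories)."""
    rng = np.random.default_rng(seed)
    # Controller parameters: initial node, action policy, node transitions.
    mu = rng.dirichlet(np.ones(Z))                    # P(z0)
    pi = rng.dirichlet(np.ones(A), size=Z)            # P(a | z)
    eta = rng.dirichlet(np.ones(Z), size=(Z, A, O))   # P(z' | z, a, o)
    for _ in range(iters):
        mu_n = np.zeros(Z)
        pi_n = np.zeros((Z, A))
        eta_n = np.zeros((Z, A, O, Z))
        for acts, obs in trajs:
            T = len(acts)
            # Forward pass: alpha[t, z] = P(a_0..a_t, z_t = z).
            alpha = np.zeros((T, Z))
            alpha[0] = mu * pi[:, acts[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t-1] @ eta[:, acts[t-1], obs[t-1], :]) * pi[:, acts[t]]
            # Backward pass: beta[t, z] = P(a_{t+1}..a_{T-1} | z_t = z).
            beta = np.zeros((T, Z))
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = eta[:, acts[t], obs[t], :] @ (pi[:, acts[t+1]] * beta[t+1])
            like = alpha[-1].sum()
            gamma = alpha * beta / like               # node marginals per step
            # E-step: accumulate expected counts.
            mu_n += gamma[0]
            for t in range(T):
                pi_n[:, acts[t]] += gamma[t]
            for t in range(T - 1):
                xi = (alpha[t][:, None] * eta[:, acts[t], obs[t], :]
                      * (pi[:, acts[t+1]] * beta[t+1])[None, :]) / like
                eta_n[:, acts[t], obs[t], :] += xi
        # M-step: normalize expected counts (unvisited (z,a,o) slices -> uniform).
        mu = mu_n / mu_n.sum()
        pi = pi_n / pi_n.sum(axis=1, keepdims=True)
        s = eta_n.sum(axis=3, keepdims=True)
        eta = np.where(s > 0, eta_n / np.maximum(s, 1e-12), 1.0 / Z)
    return mu, pi, eta
```

Each EM iteration provably does not decrease the data likelihood, which is the same style of monotone-convergence guarantee the abstract claims for PoEM's batch setting.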

Similar Articles

Learning for Multiagent Decentralized Control in Large Partially Observable Stochastic Environments

This paper presents a probabilistic framework for learning decentralized control policies for cooperative multiagent systems operating in a large partially observable stochastic environment based on batch data (trajectories). In decentralized domains, because of communication limitations, the agents cannot share their entire belief states, so execution must proceed based on local information. D...

Applications of DEC-MDPs in multi-robot systems

Optimizing the operation of cooperative multi-robot systems that can cooperatively act in large and complex environments has become an important focal area of research. This issue is motivated by many applications involving a set of cooperative robots that have to decide in a decentralized way how to execute a large set of tasks in partially observable and uncertain environments. Such decision ...

The MADP Toolbox: An Open Source Library for Planning and Learning in (Multi-)Agent Systems

This article describes the Multiagent Decision Process (MADP) Toolbox, a software library to support planning and learning for intelligent agents and multiagent systems in uncertain environments. Key features are that it supports partially observable environments and stochastic transition models; has unified support for single- and multiagent systems; provides a large number of models for decisio...

Reinforcement Learning for Control

Reinforcement learning (RL) offers a principled way to control nonlinear stochastic systems with partly or even fully unknown dynamics. Recent advances in areas such as deep learning and adaptive dynamic programming (ADP) have led to significant inroads in applications including robotics, automotive systems, smart grids, game playing, and traffic control. This open track provides a forum of interac...

Multiagent Expedition with Graphical Models

We investigate a class of multiagent planning problems termed multiagent expedition, where agents move around an open, unknown, partially observable, stochastic, and physical environment in pursuit of multiple and alternative goals of different utility. Optimal planning in multiagent expedition is highly intractable. We introduce the notion of conditional optimality, decompose the task into a s...


Journal:

Volume   Issue

Pages  -

Publication date: 2016